Results 1 - 3 of 3
1.
Sci Rep; 14(1): 8609, 2024 Apr 14.
Article in English | MEDLINE | ID: mdl-38615039

ABSTRACT

With the advent of large language models, evaluating and benchmarking these systems on important AI problems has taken on newfound importance. Such benchmarking typically involves comparing the predictions of a system against human labels (or a single 'ground truth'). However, much recent work in psychology has suggested that most tasks involving significant human judgment can have non-trivial degrees of noise. In his book Noise, Kahneman suggests that noise may be a much more significant component of inaccuracy than bias, which has been studied more extensively in the AI community. This article proposes a detailed noise audit of human-labeled benchmarks in machine commonsense reasoning, an important current area of AI research. We conduct noise audits under two important experimental conditions: one in a smaller-scale but higher-quality labeling setting, and another in a larger-scale, more realistic online crowdsourced setting. Using Kahneman's framework of noise, our results consistently show non-trivial amounts of level, pattern, and system noise, even in the higher-quality setting, with comparable results in the crowdsourced setting. We find that noise can significantly influence the performance estimates that we obtain for commonsense reasoning systems, even when the 'system' is a human, in some cases by almost 10 percent. Labeling noise also affects performance estimates of systems such as ChatGPT by more than 4 percent. Our results suggest that the default practice in the AI community of assuming and using a single ground truth, even on problems requiring seemingly straightforward human judgment, may warrant empirical and methodological revisiting.
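
As a rough illustration of the framework the abstract invokes, the sketch below estimates level, pattern, and system noise from a complete annotator-by-item matrix of graded labels. This is not code or data from the paper: the matrix is invented, and the identity system² = level² + pattern² is simply the textbook decomposition (using population variances) applied to toy numbers.

```python
import numpy as np

# A toy annotator-by-item matrix of graded commonsense judgments
# (rows = annotators, columns = items); values are illustrative only.
labels = np.array([
    [4.0, 2.0, 5.0, 3.0, 4.0],
    [3.0, 2.0, 4.0, 3.0, 5.0],
    [5.0, 3.0, 5.0, 2.0, 4.0],
])

annotator_means = labels.mean(axis=1)   # each annotator's average "level"

# Level noise: variance of the annotators' average levels.
level_noise_sq = np.var(annotator_means)

# System noise: variance among annotators judging the same item,
# pooled (averaged) over items.
system_noise_sq = np.var(labels, axis=0).mean()

# Pattern noise: the annotator-by-item interaction that remains once
# level differences are removed (system^2 = level^2 + pattern^2).
pattern_noise_sq = system_noise_sq - level_noise_sq

print(f"level^2   = {level_noise_sq:.3f}")
print(f"pattern^2 = {pattern_noise_sq:.3f}")
print(f"system^2  = {system_noise_sq:.3f}")
```

With population variances (NumPy's default, ddof=0) and a complete matrix, the decomposition is exact; real label data with missing entries would need a more careful variance-components estimate.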


Subjects
Benchmarking, Problem Solving, Humans, Judgment, Books, Language
2.
Data Brief; 51: 109666, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37876745

ABSTRACT

Machine Common Sense Reasoning is the subfield of Artificial Intelligence that aims to enable machines to behave or make decisions as humans would in everyday, ordinary situations. To measure progress, benchmarks in the form of question-answering datasets have been developed and published in the community to evaluate machine commonsense models, including large language models. We describe the individual label data produced by six human annotators, originally used to compute the ground truth for the datasets composing the Theoretically-Grounded Commonsense Reasoning (TG-CSR) benchmark. Following a set of instructions, annotators were given spreadsheets containing the original TG-CSR prompts and asked to enter labels in specific spreadsheet cells during annotation sessions. TG-CSR data is organized in JSON files, individual raw label data in a spreadsheet file, and individual normalized label data in JSONL files. The release of individual labels enables analysis of the labeling process itself, including studies of noise and consistency across annotators.
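
Because the release is meant to support analyses of annotator noise and consistency, a minimal sketch of such an analysis is shown below. The filename and the JSONL field names ("prompt_id", "annotator", "label") are assumptions made for illustration, not the dataset's actual schema.

```python
import json

# Hypothetical reader for the per-annotator normalized label files the
# abstract describes; field names are assumed, not the published schema.
def load_labels(path):
    with open(path, "r", encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def group_by_prompt(records):
    """Collect each annotator's label under its prompt for agreement checks."""
    grouped = {}
    for r in records:
        grouped.setdefault(r["prompt_id"], {})[r["annotator"]] = r["label"]
    return grouped

records = load_labels("tg_csr_individual_labels.jsonl")
grouped = group_by_prompt(records)
split = sum(1 for labels in grouped.values() if len(set(labels.values())) > 1)
print(f"{split} of {len(grouped)} prompts have annotator disagreement")
```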

3.
J Nurs Educ; 56(2): 110-114, 2017 Feb 01.
Article in English | MEDLINE | ID: mdl-28141885

ABSTRACT

BACKGROUND: Patient care problems arise when health care consumers and professionals find health information on the Internet because that information is often inaccurate. To mitigate this problem, nurses can develop Web literacy and share that skill with health care consumers. This study evaluated a Web-literacy intervention designed to help undergraduate nursing students find reliable Web-based health information.
METHOD: A pre- and postsurvey queried undergraduate nursing students in an informatics course; the intervention comprised a lecture, in-class practice, and assignments about health Web site evaluation tools. Data were analyzed using Wilcoxon signed-rank tests and ANOVA.
RESULTS: Pre-intervention, 75.9% of participants reported using Web sites to obtain health information. Postintervention, 87.9% displayed confidence in using an evaluation tool. Both the ability to critique health Web sites (p = .005) and confidence in finding reliable Internet-based health information (p = .058) increased.
CONCLUSION: Web-literacy education guides nursing students to find, evaluate, and use reliable Web sites, which improves their ability to deliver safer patient care. [J Nurs Educ. 2017;56(2):110-114.].
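
For readers unfamiliar with the test named in the METHOD section, here is a minimal sketch of a Wilcoxon signed-rank comparison of paired pre/post scores; the scores are invented for illustration, not the study's data.

```python
from scipy.stats import wilcoxon

# Invented pre/post Likert-style confidence scores (1-5) for ten students;
# not the study's data, just an illustration of the paired test.
pre  = [2, 3, 2, 4, 3, 2, 3, 1, 2, 3]
post = [4, 4, 3, 5, 4, 3, 4, 3, 3, 4]

# Paired, non-parametric test: did scores shift between the two surveys?
stat, p = wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.4f}")
```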


Subjects
Digital Literacy, Computer User Training/methods, Education, Nursing, Baccalaureate/methods, Information Storage and Retrieval/methods, Students, Nursing, Female, Humans, Male, Self Report